The Strange Disappearance of an Anti-AI Activist

The Atlantic - Technology

Sam Kirchner wants to save the world from artificial superintelligence. He's been missing for two weeks.

Before Sam Kirchner vanished, before the San Francisco Police Department began to warn that he could be armed and dangerous, before OpenAI locked down its offices over the potential threat, those who encountered him saw him as an ordinary, if ardent, activist. Phoebe Thomas Sorgen met Kirchner a few months ago at Travis Air Force Base, northeast of San Francisco, at a protest against immigration policy and U.S. military aid to Israel. Sorgen, a longtime activist whose first protests were against the Vietnam War, was going to block an entrance to the base with six other older women. Kirchner, 27 years old, was there with a couple of other members of a new group called Stop AI, and they all agreed to go along to record video on their phones in case of a confrontation with the police.


Not Even Lawsuits Can Stop AI

Slate

Candice Lim and Kate Lindsay are joined by Slate senior tech editor Tony Ho Tran to parse what Meta's victory in a recent AI lawsuit means for its users. Tools like ChatGPT are becoming more common at home and at work but, without protections, could threaten not just the creativity of artists but anyone who posts online. As regulation lags behind, how can we protect ourselves? And how many of us are using AI without even knowing it? This podcast is produced by Daisy Rosario, Vic Whitley-Berry, Candice Lim, and Kate Lindsay.


The big idea: can we stop AI making humans obsolete?

The Guardian

Right now, most big AI labs have a team figuring out ways that rogue AIs might escape supervision, or secretly collude with each other against humans. But there's a more mundane way we could lose control of civilisation: we might simply become obsolete. This wouldn't require any hidden plots – if AI and robotics keep improving, it's what happens by default. AI developers are firmly on track to build better replacements for humans in almost every role we play: not just economically as workers and decision-makers, but culturally as artists and creators, and even socially as friends and romantic companions. What place will humans have when AI can do everything we do, only better?


Protesters Are Fighting to Stop AI, but They're Split on How to Do It

WIRED

On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order. "When do we want it?" These protesters are part of PauseAI, a group of activists petitioning for companies to pause development of large AI models, which they fear could pose a risk to the future of humanity. Other PauseAI protests are taking place across the globe: in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities. Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit--a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters is still figuring out the best way to communicate its message. "The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." The group's main demand is for a pause on the training of AI systems more powerful than GPT-4--it's calling for all countries to implement this measure, but specifically calls out the United States as the home of most leading AI labs. The group also wants all UN member states to sign a treaty that sets up an international AI safety agency responsible for approving new deployments of AI systems and training runs of large models. The protests are taking place on the same day that OpenAI announced a new version of ChatGPT designed to make the chatbot act more like a human. "We have banned technology internationally before," says Meindertsma, pointing to the Montreal Protocol, a global agreement finalized in 1987 that saw the phaseout of CFCs and other chemicals known to deplete the ozone layer. "We've got treaties that ban blinding laser weapons."


Kamala Harris: Admin has duty to stop AI 'algorithmic discrimination,' ensure benefits 'shared equitably'

FOX News

AI expert Marva Bailer explains how, even though there are currently laws in place, the average person has more access than ever to tools for creating deepfakes of celebrities. Vice President Kamala Harris said Monday that it's the Biden administration's "duty" to prevent "algorithmic discrimination" in the field of artificial intelligence (AI), and to ensure its benefits are "shared equitably" among society. She continued what some have called the administration's effort to make AI "woke" during her remarks alongside President Biden at the White House just before he signed an executive order establishing AI standards for private companies. "I believe we have a moral, ethical and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensure that everyone is able to enjoy its benefits. Since we took office, President Biden and I have worked to uphold that duty," Harris told a crowd gathered in the White House's East Room.


Ways To Stop AI From Recognizing Your Face In Selfies

#artificialintelligence

Fawkes may prevent a new facial recognition system from recognizing a person but can't change or sabotage the existing systems that have already been trained on one's unprotected images. Thus, Valeriia Cherepanova and her colleagues at the University of Maryland, one of the teams at ICLR, recently addressed this issue and developed a tool called LowKey. This tool expands on Fawkes by applying perturbations to images based on a stronger adversarial attack, which can also fool the pretrained commercial models.
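The general idea behind such tools is to add a small, nearly invisible perturbation to each pixel so that a recognition model misreads the image. A minimal sketch of the simplest version of this technique, a fast-gradient-sign (FGSM-style) perturbation, is below; the actual Fawkes and LowKey tools use stronger, more sophisticated attacks, and the gradient here is a stand-in for one computed against a real recognition model:

```python
import numpy as np

def fgsm_perturb(image, grad, eps=0.03):
    """Nudge each pixel by +/-eps in the direction that increases
    the recognition model's loss, keeping values in [0, 1]."""
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

# Toy example: random "image" and a placeholder gradient standing in
# for one backpropagated from a face-recognition model.
rng = np.random.default_rng(0)
image = rng.random((4, 4))
grad = rng.standard_normal((4, 4))

adv = fgsm_perturb(image, grad)
# Each pixel moves by at most eps, so the change is hard to see.
assert np.all(np.abs(adv - image) <= 0.03 + 1e-9)
```

Because each pixel shifts by at most `eps`, the cloaked photo looks unchanged to a person, but a model trained on many such photos learns a distorted representation of the face.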


How to stop AI from recognizing your face in selfies

MIT Technology Review

A number of AI researchers are pushing back and developing ways to make sure AIs can't learn from personal data. Two of the latest are being presented this week at ICLR, a leading AI conference. "I don't like people taking things from me that they're not supposed to have," says Emily Wenger at the University of Chicago, who developed one of the first tools to do this, called Fawkes, with her colleagues last summer: "I guess a lot of us had a similar idea at the same time." Actions like deleting data that companies have on you, or deliberately polluting data sets with fake examples, can make it harder for companies to train accurate machine-learning models. But these efforts typically require collective action, with hundreds or thousands of people participating, to make an impact.


How to stop AI from perpetuating harmful biases

#artificialintelligence

Artificial Intelligence (AI) is already re-configuring the world in conspicuous ways. Data drives our global digital ecosystem, and AI technologies reveal patterns in data. Smartphones, smart homes, and smart cities influence how we live and interact, and AI systems are increasingly involved in recruitment decisions, medical diagnoses, and judicial verdicts. Whether this scenario is utopian or dystopian depends on your perspective. The potential risks of AI are enumerated repeatedly.


In 2020, let's stop AI ethics-washing and actually do something

#artificialintelligence

In some ways, my wish did come true. In 2019, there was more talk of AI ethics than ever before. Dozens of organizations produced AI ethics guidelines; companies rushed to establish responsible AI teams and parade them in front of the media. It's hard to attend an AI-related conference anymore without part of the programming being dedicated to an ethics-related message: How do we protect people's privacy when AI needs so much data? How do we empower marginalized communities instead of exploiting them?